Abstract:
In the context of analyzing wafer maps, we present a novel approach to enable analytics to be driven by user queries. The analytic context includes two aspects: (1) grouping wafer maps based on their failure patterns and (2) for a failure pattern found at wafer probe, checking whether it correlates with the final-test result (feedforward) and with the E-test result (feedback). We introduce language-driven analytics and show how a formal language model in the backend can enable natural language queries in the frontend. The approach is applied to analyze test data from a recent product line, with interesting findings highlighted to explain the approach and its use.
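The two analytic aspects above can be sketched end-to-end on synthetic data. This is a minimal illustration, not the paper's pipeline: the wafer size, the "edge"/"center" templates, and the final-test fallout counts are all invented, and nearest-template matching stands in for whatever grouping method the authors actually use.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_wafer(pattern, n=16):
    """Binary fail map: 1 = failing die. Two synthetic failure patterns."""
    yy, xx = np.mgrid[0:n, 0:n]
    r = np.hypot(yy - n / 2, xx - n / 2)
    base = (r > n * 0.4) if pattern == "edge" else (r < n * 0.2)
    noise = rng.random((n, n)) < 0.05          # sprinkle of random fails
    return (base | noise).astype(float)

# (1) Group wafers by failure pattern: nearest-template assignment.
templates = {p: make_wafer(p) for p in ("edge", "center")}
wafers = [make_wafer("edge") for _ in range(5)] + \
         [make_wafer("center") for _ in range(5)]
groups = [min(templates, key=lambda p: np.abs(w - templates[p]).sum())
          for w in wafers]

# (2) Feedforward check: does the probe-time pattern correlate with
# final-test fallout?  Hypothetical per-wafer final-test fail counts.
fallout = np.array([9, 8, 10, 9, 11, 2, 3, 2, 1, 3])
is_edge = np.array([g == "edge" for g in groups], dtype=float)
corr = np.corrcoef(is_edge, fallout)[0, 1]
print(groups, round(corr, 2))
```

A strong correlation here would flag the edge pattern as worth tracing forward to final test; the same cross-tabulation against E-test parameters gives the feedback direction.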
Abstract:
In this work, we consider learning a wafer plot recognizer when only one training sample is available. We introduce an approach called Manifestation Learning to enable such learning. The underlying technology utilizes the Variational AutoEncoder (VAE) approach to construct a so-called Manifestation Space. The training sample is projected into this space, and recognition is achieved through a pre-trained model in the space. Using wafer probe test data from an automotive product line, this paper explains the learning approach, its feasibility, and its limitations.
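The recognition step can be sketched as follows, under heavy assumptions: a random linear map stands in for the pre-trained VAE encoder, and nearest-prototype matching in the latent space stands in for the paper's pre-trained model. Everything here (dimensions, class names, the perturbation) is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
D, Z = 64, 4                                  # plot feature dim, latent dim

# Stand-in for the pre-trained VAE encoder's mean network mu(x);
# a real VAE encoder also outputs a variance.
W = rng.standard_normal((Z, D)) / np.sqrt(D)

def encode(x):
    return W @ x

# One training sample per class, projected into the manifestation space
# to form class prototypes -- the one-shot setting from the abstract.
src = {c: rng.standard_normal(D) for c in ("ring", "scratch")}
protos = {c: encode(x) for c, x in src.items()}

def recognize(x):
    z = encode(x)
    return min(protos, key=lambda c: np.linalg.norm(z - protos[c]))

# A noisy variant of the "ring" sample should still land near its prototype.
query = src["ring"] + 0.1 * rng.standard_normal(D)
print(recognize(query))
```

The point of matching in the latent space rather than pixel space is that one sample per class can suffice when the encoder already captures pattern structure.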
Abstract:
Large-scale distributed systems offer computational power at unprecedented levels. In the past, HPC users typically had access to relatively few individual supercomputers and, in general, would assign a one-to-one mapping of applications to machines. Modern HPC users have simultaneous access to a large number of individual machines and are beginning to make use of all of them for single-application execution cycles. One method that application developers have devised in order to take advantage of such systems is to organize an entire application execution cycle as a workflow. The scheduling of such workflows has been the topic of a great deal of research in the past few years and, although very sophisticated algorithms have been devised, a very specific aspect of these distributed systems, namely that most supercomputing resources employ batch queue scheduling software, has heretofore been omitted from consideration, presumably because it is difficult to model accurately. In this work, we augment an existing workflow scheduler through the introduction of methods that make accurate predictions of both the performance of the application on specific hardware and the amount of time individual workflow tasks will spend waiting in batch queues. Our results show that although a workflow scheduler alone may choose correct task placement based on data locality or network connectivity, this benefit is often compromised by the fact that most jobs submitted to current systems must wait in overcommitted batch queues for a significant portion of time. However, incorporating the enhancements we describe improves workflow execution time in settings where batch queues impose significant delays on constituent workflow tasks.
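The placement rule argued for above can be sketched in a few lines: pick the resource minimizing predicted turnaround (runtime plus queue wait plus data movement) rather than predicted runtime alone. The site names and all the numbers are hypothetical, and the real system replaces the constants with learned performance and queue-wait predictors.

```python
# Predicted seconds per site for one workflow task (all values made up).
sites = {
    "fast_cluster":  {"runtime": 120.0, "queue_wait": 900.0, "transfer": 30.0},
    "slow_cluster":  {"runtime": 300.0, "queue_wait": 60.0,  "transfer": 30.0},
    "local_cluster": {"runtime": 240.0, "queue_wait": 45.0,  "transfer": 0.0},
}

def turnaround(s):
    c = sites[s]
    return c["runtime"] + c["queue_wait"] + c["transfer"]

# A scheduler ignoring queues picks the fastest machine; adding the
# queue-wait prediction flips the decision, as the abstract argues.
fastest = min(sites, key=lambda s: sites[s]["runtime"] + sites[s]["transfer"])
best = min(sites, key=turnaround)
print(fastest, best)
```

Here the nominally fastest machine loses because its batch queue is overcommitted, which is exactly the effect the augmented scheduler is built to anticipate.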
Abstract:
Interposer-based packaging is becoming a widespread methodology for tightly integrating multiple heterogeneous dies into a single package, with the potential to improve manufacturing yield and build larger-than-reticle-sized systems. However, interposer integration also introduces possible communication bottlenecks and cost overheads that can outweigh these benefits. To avoid these drawbacks, the abundant interposer interconnect can be leveraged as a network-on-chip interconnection fabric to provide high-bandwidth, low-latency communication between chiplets and memory stacks. This work investigates this new interposer design space of passive and active interposer technologies, network-on-chip topologies, and clocking schemes to determine the cost-optimal interposer architectures for a range of performance requirements.
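The design-space sweep described above amounts to: enumerate (interposer technology, NoC topology) points, then keep the cheapest configuration meeting each performance target. The sketch below uses made-up relative cost and bandwidth numbers purely to show the shape of the search; the paper's actual cost and performance models are far more detailed.

```python
import itertools

# Toy relative bandwidth and cost factors (placeholders, not real data).
configs = []
for interposer, topo in itertools.product(("passive", "active"),
                                          ("mesh", "crossbar")):
    bw = {"mesh": 1.0, "crossbar": 2.0}[topo] * \
         {"passive": 1.0, "active": 1.5}[interposer]
    cost = {"passive": 1.0, "active": 2.5}[interposer] + \
           {"mesh": 0.2, "crossbar": 0.8}[topo]
    configs.append({"interposer": interposer, "topo": topo,
                    "bw": bw, "cost": cost})

def cost_optimal(bw_target):
    """Cheapest configuration that still meets the bandwidth target."""
    feasible = [c for c in configs if c["bw"] >= bw_target]
    return min(feasible, key=lambda c: c["cost"])

low = cost_optimal(1.0)    # modest requirement
high = cost_optimal(2.5)   # aggressive requirement
print(low["interposer"], low["topo"], "|", high["interposer"], high["topo"])
```

Even this toy model shows the qualitative result the abstract points at: the cost-optimal interposer architecture shifts as the performance requirement rises.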
Abstract:
Today, beamlines at DOE light sources produce vast amounts of data that can easily outgrow local compute and storage capacity. Beamline operators need mechanisms that give experimenters easy access to their data (with appropriate access control) while hosting it externally, to quickly free up constrained local storage capacity. Here, we show how HPC and network facilities can be coupled to a light source facility to build an efficient, low-maintenance data distribution platform.
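The offload policy implied above can be sketched as: when local beamline storage exceeds capacity, move completed datasets to the external facility and keep only a catalog entry locally. The capacity, dataset sizes, eviction order, and `transfer_to_hpc` helper below are all hypothetical stand-ins for the real managed-transfer machinery.

```python
local = {"scan_001": 120, "scan_002": 80, "scan_003": 200}  # dataset -> GB
catalog = {}            # name -> external location experimenters can access
CAPACITY_GB = 250

def transfer_to_hpc(name):
    """Placeholder for the real transfer call to the HPC facility."""
    return f"hpc://archive/{name}"

while sum(local.values()) > CAPACITY_GB:
    # Evict the largest dataset first to free space fastest.
    name = max(local, key=local.get)
    catalog[name] = transfer_to_hpc(name)
    del local[name]

print(sorted(local), catalog)
```

Access control and the actual data movement are what the coupled network and HPC facilities provide; the beamline side only has to maintain the catalog.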
Abstract:
We present a novel approach where wafer map pattern analytics are driven by natural language queries. At the core is a semantic parser that translates a user query into a meaning representation comprising instructions to generate a summary plot. The allowable plot types are pre-defined and serve as an interface that communicates user intent to the analytics software backend. Application results on wafer maps from a recent production line are presented to explain the capabilities and benefits of the proposed approach.
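The parser's contract can be illustrated with a deliberately crude keyword-based sketch: a query maps onto one of the pre-defined plot types plus its argument slots. The plot-type names and the `lot` slot are invented for illustration; the actual semantic parser and meaning representation are far richer.

```python
import re

PLOT_TYPES = {"pareto", "trend", "stack"}   # hypothetical pre-defined types

def parse(query):
    """Toy stand-in for the semantic parser: query -> meaning representation."""
    q = query.lower()
    plot = next((p for p in PLOT_TYPES if p in q), "stack")  # default type
    m = re.search(r"lot\s+(\w+)", q)
    return {"plot": plot, "lot": m.group(1) if m else None}

mr = parse("Show a pareto of failure patterns for lot A123")
print(mr)
```

The key design point survives even in the toy version: because the backend only has to honor a closed set of plot types, the meaning representation is a narrow, checkable interface between free-form language and the analytics software.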